    Transfer Learning for Multi-surrogate-model Optimization

    Surrogate-model-based optimization is widely used to solve black-box optimization problems when evaluating the target system is expensive. However, when the optimization budget is limited to one or a few evaluations, surrogate-model-based optimization may perform poorly due to the lack of knowledge about the search space. In this case, transfer learning helps to obtain a good optimization result by reusing experience from previous optimization runs. Even when the budget is not strictly limited, transfer learning can improve the final results of black-box optimization. Recent work in surrogate-model-based optimization showed that using multiple surrogates (i.e., applying multi-surrogate-model optimization) can be highly efficient in complex search spaces. The main assumption of this thesis is that transfer learning can further improve the quality of multi-surrogate-model optimization. However, to the best of our knowledge, no approaches to transfer learning in the multi-surrogate-model context exist yet. In this thesis, we propose an approach to transfer learning for multi-surrogate-model optimization. It encompasses an improved method for determining the expediency of knowledge transfer, adapted multi-surrogate-model recommendation, multi-task learning parameter tuning, and few-shot learning techniques. We evaluated the proposed approach on a set of algorithm selection and parameter setting problems, comprising mathematical function optimization and the traveling salesman problem, as well as random forest hyperparameter tuning over OpenML datasets.
    The evaluation shows that the proposed approach improves the quality delivered by multi-surrogate-model optimization and ensures good optimization results even under a strictly limited budget.

    Contents:
    1 Introduction
    1.1 Motivation
    1.2 Research objective
    1.3 Solution overview
    1.4 Thesis structure
    2 Background
    2.1 Optimization problems
    2.2 From single- to multi-surrogate-model optimization
    2.2.1 Classical surrogate-model-based optimization
    2.2.2 The purpose of multi-surrogate-model optimization
    2.2.3 BRISE 2.5.0: Multi-surrogate-model-based software product line for parameter tuning
    2.3 Transfer learning
    2.3.1 Definition and purpose of transfer learning
    2.4 Summary of the Background
    3 Related work
    3.1 Questions to transfer learning
    3.2 When to transfer: Existing approaches to determining the expediency of knowledge transfer
    3.2.1 Meta-features-based approaches
    3.2.2 Surrogate-model-based similarity
    3.2.3 Relative landmarks-based approaches
    3.2.4 Sampling landmarks-based approaches
    3.2.5 Similarity threshold problem
    3.3 What to transfer: Existing approaches to knowledge transfer
    3.3.1 Ensemble learning
    3.3.2 Search space pruning
    3.3.3 Multi-task learning
    3.3.4 Surrogate model recommendation
    3.3.5 Few-shot learning
    3.3.6 Other approaches to transferring knowledge
    3.4 How to transfer (discussion): Peculiarities and required design decisions for the TL implementation in multi-surrogate-model setup
    3.4.1 Peculiarities of model recommendation in multi-surrogate-model setup
    3.4.2 Required design decisions in multi-task learning
    3.4.3 Few-shot learning problem
    3.5 Summary of the related work analysis
    4 Transfer learning for multi-surrogate-model optimization
    4.1 Expediency of knowledge transfer
    4.1.1 Experiments’ similarity definition as a variability point
    4.1.2 Clustering to filter the most suitable experiments
    4.2 Dynamic model recommendation in multi-surrogate-model setup
    4.2.1 Variable recommendation granularity
    4.2.2 Model recommendation by time and performance criteria
    4.3 Multi-task learning
    4.4 Implementation of the proposed concept
    4.5 Conclusion of the proposed concept
    5 Evaluation
    5.1 Benchmark suite
    5.1.1 APSP for the meta-heuristics
    5.1.2 Hyperparameter optimization of the Random Forest algorithm
    5.2 Environment setup
    5.3 Evaluation plan
    5.4 Baseline evaluation
    5.5 Meta-tuning for a multi-task learning approach
    5.5.1 Revealing the dependencies between the parameters of multi-task learning and its performance
    5.5.2 Multi-task learning performance with the best found parameters
    5.6 Expediency determination approach
    5.6.1 Expediency determination as a variability point
    5.6.2 Flexible number of the most similar experiments with the help of clustering
    5.6.3 Influence of the number of initial samples on the quality of expediency determination
    5.7 Multi-surrogate-model recommendation
    5.8 Few-shot learning
    5.8.1 Transfer of the built surrogate models’ combination
    5.8.2 Transfer of the best configuration
    5.8.3 Transfer from different experiment instances
    5.9 Summary of the evaluation results
    6 Conclusion and Future work
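
    The classical surrogate-model-based loop the thesis builds on can be illustrated with a minimal sketch: fit a cheap model to the points evaluated so far, then spend the next expensive evaluation at the model's predicted optimum. Everything here (the toy target, the quadratic surrogate, the function names) is illustrative, not the thesis's actual BRISE machinery.

```python
import numpy as np

def black_box(x):
    # Toy stand-in for an expensive target function.
    return (x - 1.7) ** 2 + 0.3

def surrogate_optimize(f, bounds, n_init=4, n_iter=6, seed=None):
    """Minimal surrogate-model-based optimization sketch:
    fit a quadratic surrogate to the evaluated points, then
    spend the next expensive evaluation at its predicted minimum."""
    rng = np.random.default_rng(seed)
    xs = list(rng.uniform(bounds[0], bounds[1], n_init))  # initial samples
    ys = [f(x) for x in xs]
    for _ in range(n_iter):
        coeffs = np.polyfit(xs, ys, 2)                    # quadratic surrogate
        grid = np.linspace(bounds[0], bounds[1], 200)
        cand = grid[np.argmin(np.polyval(coeffs, grid))]  # surrogate minimum
        xs.append(cand)
        ys.append(f(cand))                                # expensive evaluation
    best = int(np.argmin(ys))
    return xs[best], ys[best]

x_best, y_best = surrogate_optimize(black_box, (0.0, 4.0), seed=0)
```

    Multi-surrogate-model optimization generalizes this by maintaining several such models and combining or selecting among their suggestions; transfer learning then seeds the initial samples or the model choice from previous runs instead of starting blind.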

    Intelligent Workload Scheduling in Distributed Computing Environment for Balance between Energy Efficiency and Performance

    Global digital transformation requires more productive large-scale distributed systems. Such systems must meet many requirements, such as high availability, low latency, and reliability. One increasingly important challenge is the energy efficiency of large-scale computing systems. Many service providers prefer cheap commodity servers in their distributed infrastructure, which makes the energy-efficiency problem even harder because of hardware inhomogeneity. In this chapter, an approach to balancing performance and energy-efficiency requirements within an inhomogeneous distributed computing environment is proposed. The main idea is to use each node's individual energy consumption model to generate distributed-system scaling patterns based on the statistical daily workload, and then to adjust these patterns to match the current workload while using the energy-aware Power Consumption and Performance Balance (PCPB) scheduling algorithm. The approach is tested using Matlab modeling. As a result, large-scale distributed computing systems save energy while maintaining a fairly high level of performance and meeting the requirements of the service-level agreement (SLA).
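
    The two ingredients of the approach, a per-node energy model and a daily scaling pattern, can be sketched as follows. The linear load-to-power model and all the numbers are illustrative assumptions, not values from the chapter.

```python
import math

def node_power(u, p_idle=60.0, p_max=200.0):
    """Linear per-node energy model: power (W) as a function of CPU load u in [0, 1]."""
    return p_idle + (p_max - p_idle) * u

def scaling_pattern(hourly_load, node_capacity):
    """Number of nodes to keep active each hour so that combined
    capacity covers the statistical daily workload."""
    return [max(1, math.ceil(load / node_capacity)) for load in hourly_load]

def pattern_energy(hourly_load, node_capacity, pattern):
    """Energy estimate (Wh) if the active nodes share the load evenly."""
    total = 0.0
    for load, n in zip(hourly_load, pattern):
        u = min(1.0, load / (n * node_capacity))
        total += n * node_power(u)
    return total

daily_load = [20, 15, 40, 90, 120, 80]            # requests/s per time slot (toy data)
pattern = scaling_pattern(daily_load, node_capacity=50)
```

    At runtime, the precomputed pattern would be adjusted to the observed workload, with PCPB deciding task placement on the nodes left active.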

    Power Consumption and Performance Balance (PCPB) scheduling algorithm for computer cluster

    The problem of energy consumption is topical in every sphere of human activity, and it is especially important in information technology (IT), where the amount of data to be processed grows every day. In this paper, the PCPB (Power Consumption and Performance Balance) algorithm is proposed: an energy-aware scheduling algorithm aimed at reducing the power consumption of a computer cluster without reducing its performance. The main idea is to use an energy model (the relation between a node's CPU load and its consumed power) for each node in the cluster and to find a balance between power consumption and performance while allocating tasks. A mathematical model was developed to compare PCPB with other algorithms.
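
    The idea of using per-node energy models during task allocation can be sketched as a greedy scheduler that places each task on the feasible node with the smallest power increase. This is a hedged illustration of the general technique, not the published PCPB algorithm; the linear models and capacity limit are assumptions.

```python
def marginal_power(node, task_load):
    """Power increase (W) if the task is placed on this node, under an
    assumed linear load-to-power model fitted per node."""
    return (node["p_max"] - node["p_idle"]) * task_load

def schedule(tasks, nodes):
    """Greedy energy-aware sketch: for each task, pick the node that still
    has spare capacity and whose power would grow the least."""
    for t in tasks:
        feasible = [n for n in nodes if n["load"] + t <= 1.0]
        best = min(feasible, key=lambda n: marginal_power(n, t))
        best["load"] += t
    return nodes

# Inhomogeneous two-node cluster: node 1 has the flatter power curve.
cluster = [
    {"p_idle": 50.0, "p_max": 250.0, "load": 0.0},
    {"p_idle": 50.0, "p_max": 150.0, "load": 0.0},
]
schedule([0.4, 0.4, 0.4], cluster)
```

    The efficient node fills up first; the third task spills over to the less efficient one only when capacity runs out, which is where the balance between consumption and performance appears.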

    4th International Black Sea Conference on Communications and Networking (BlackSeaCom 2016)

    This is a pre-print of an article published in "2016 IEEE International Black Sea Conference on Communications and Networking (BlackSeaCom 2016)". The final authenticated version is available online at: https://doi.org/10.1109/BlackSeaCom.2016.7901587. The Internet of Things (IoT) is penetrating every sphere of human life, simplifying everyday tasks and raising labor efficiency. Its development faces many challenges, and additional ones appear in IoT systems with nomadic units. The paper addresses the problem of collecting, processing, and storing data in an IoT system containing nomadic units. One of the difficulties in developing such systems is reducing the volume of data that must be transferred. To improve system efficiency, it is proposed to use an additional transitional hub interacting with the cloud services.
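
    The role of the transitional hub in reducing transfer volume can be illustrated with a minimal sketch: instead of forwarding every raw sample from a nomadic unit to the cloud, the hub forwards one aggregate per window. The windowed-average scheme and all names are illustrative assumptions, not the paper's concrete design.

```python
def aggregate(readings, window=10):
    """Transitional-hub sketch: compress a raw reading stream by
    forwarding one windowed average instead of every sample."""
    return [
        sum(readings[i:i + window]) / len(readings[i:i + window])
        for i in range(0, len(readings), window)
    ]

raw = list(range(100))          # 100 raw samples from a nomadic unit
compressed = aggregate(raw)     # 10 aggregated values sent to the cloud
```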